15 Dynamic Programming

Algorithmics 2025

Dynamic Programming

Invented by Richard Bellman in the 1950s.

An algorithm design technique

Combines the exhaustiveness of brute force with the efficiency of reusing results. Two common implementations:

  • Recursion + dictionary (top-down memoization)
  • Iteration over a table (bottom-up tabulation)

Dynamic Programming (definition)

An algorithm design technique for problems that have:

  • overlapping subproblems (the same subproblems recur), and
  • optimal substructure (an optimal solution is built from optimal subsolutions).

Key idea: solve each subproblem once, store the result in a table or dictionary, and reuse it to avoid recomputation.

Two main styles

Top-down (memoization):

  • write the natural recursive solution
  • add a memo table to cache results as they are computed

Bottom-up (tabulation):

  • identify the subproblem order from smallest to largest
  • fill a table iteratively until the target answer is reached

Example: Fibonacci

Definition:
F(0)=0, F(1)=1, F(n)=F(n−1)+F(n−2)

Naive recursion (not DP)

function Fib(n):
  if n <= 1: return n
  return Fib(n-1) + Fib(n-2)
  1. What is the time complexity of this function?
  • \(O(2^n)\)
  2. Draw the recursion tree for Fib(5).

Recursive Tree

graph TD
    A["fib(5)"]
    A --> B["fib(4)"]
    A --> C["fib(3)"]

    B --> D["fib(3)"]
    B --> E["fib(2)"]

    D --> F["fib(2)"]
    D --> G["fib(1)"]

    E --> H["fib(1)"]
    E --> I["fib(0)"]

    F --> J["fib(1)"]
    F --> K["fib(0)"]

    C --> L["fib(2)"]
    C --> M["fib(1)"]

    L --> N["fib(1)"]
    L --> O["fib(0)"]  

Naive recursion (not DP)

function Fib(n):
  if n <= 1: return n
  return Fib(n-1) + Fib(n-2)

This function is inefficient because it repeats the same calculation many times.

  1. How many redundant function calls are made when fib(5) is called?

Redundant calls

graph TD
    A["fib(5)"]
    A --> B["fib(4)"]
    A --> C["fib(3)"]

    B --> D["fib(3)"]
    B --> E["fib(2)"]

    D --> F["fib(2)"]
    D --> G["fib(1)"]

    E --> H["fib(1)"]
    E --> I["fib(0)"]

    F --> J["fib(1)"]
    F --> K["fib(0)"]

    C --> L["fib(2)"]
    C --> M["fib(1)"]

    L --> N["fib(1)"]
    L --> O["fib(0)"]

    style G fill:#fdd,stroke:#b00   
    style E fill:#fdd,stroke:#b00 
    style H fill:#fdd,stroke:#b00 
    style I fill:#fdd,stroke:#b00 
    style C fill:#fdd,stroke:#b00  
    style L fill:#fdd,stroke:#b00 
    style M fill:#fdd,stroke:#b00 
    style N fill:#fdd,stroke:#b00   
    style O fill:#fdd,stroke:#b00   

  • 9 redundant calls
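
The count above can be checked mechanically. The sketch below (the helper name `count_calls` is my own, not from the slides) records how often the naive recursion visits each argument; the number of redundant calls is the total minus the number of distinct arguments.

```python
def count_calls(n, counts=None):
    """Count how many times naive Fib recursion is invoked per argument."""
    if counts is None:
        counts = {}
    counts[n] = counts.get(n, 0) + 1
    if n > 1:
        count_calls(n - 1, counts)
        count_calls(n - 2, counts)
    return counts

counts = count_calls(5)
total = sum(counts.values())     # 15 calls in total for fib(5)
redundant = total - len(counts)  # 15 - 6 distinct arguments = 9
```

This confirms the 9 redundant calls visible as shaded nodes in the tree.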

Naive recursion

function Fib(n):
  if n <= 1: return n
  return Fib(n-1) + Fib(n-2)

Recursion with Memoisation

memo = {}
function Fib(n):
  if n in memo: return memo[n]
  if n <= 1: return n
  memo[n] = Fib(n-1) + Fib(n-2)
  return memo[n]

Recursion with Memoisation

memo = {0: 0, 1: 1}

function FibM(n):
  if n in memo: return memo[n]
  memo[n] = FibM(n-1) + FibM(n-2)
  return memo[n]
  1. What is the time complexity of FibM?
  • Time O(n), space O(n) for the memo plus the recursion stack.
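
A direct Python rendering of FibM might look like this (a sketch; the name `fib_m` is mine):

```python
memo = {0: 0, 1: 1}  # base cases seeded up front

def fib_m(n):
    """Top-down Fibonacci: each F(k) is computed once, then cached."""
    if n not in memo:
        memo[n] = fib_m(n - 1) + fib_m(n - 2)
    return memo[n]
```

Each argument triggers at most one pair of recursive calls, which is where the O(n) bound comes from.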

Iteration (bottom-up)

function FibTable(n):
  if n <= 1: return n
  dp = new array of size n+1
  dp[0] = 0
  dp[1] = 1
  for i from 2 to n:
    dp[i] = dp[i-1] + dp[i-2]
  return dp[n]
  • Stores all values F(0)…F(n).
  • Time O(n), space O(n).
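
In Python the tabulation version might be written as follows (a sketch; `fib_table` is my naming):

```python
def fib_table(n):
    """Bottom-up Fibonacci: fill a table from the base cases upward."""
    if n <= 1:
        return n
    dp = [0] * (n + 1)
    dp[1] = 1
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]
```

Note there is no recursion at all: the loop visits subproblems in exactly the order they are needed.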

Optimised bottom-up

The “table” doesn’t have to be an array — it can be a dictionary, or even just a couple of variables, as long as you’re storing and reusing subproblem results.

function FibOptimised(n):
  if n <= 1: return n
  a = 0; b = 1      # F(0), F(1)
  for i from 2 to n:
    temp = a
    a = b
    b = temp + b
  return b
  • Stores only two values: F(i−2) and F(i−1).
  • Time O(n), space O(1).
  • Same logic, just no big table.
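
The same space-optimised loop in Python, using tuple assignment in place of the explicit temp variable (a sketch; `fib_optimised` is my naming):

```python
def fib_optimised(n):
    """Bottom-up Fibonacci keeping only the last two values."""
    if n <= 1:
        return n
    a, b = 0, 1  # F(i-2), F(i-1)
    for _ in range(2, n + 1):
        a, b = b, a + b  # slide the window forward one step
    return b
```

Tuple assignment evaluates the right-hand side before binding, so it expresses the temp-variable swap in one line.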

Bottom-up and DP

  • Bottom-up DP:

    • For problems with overlapping subproblems + optimal substructure
    • Builds solutions from small subproblems up to the final answer
    • Stores results (full table or compressed version) → avoids recomputation
    • Inherently dynamic programming
  • But not all bottom-up algorithms are DP

    • Example: Insertion sort

      • Builds sorted array one element at a time
      • No reuse of subproblem results → not DP